Logical Theories for Agent Introspection
Author: Thomas Bolander
Abstract
Artificial intelligence systems (agents) generally have models of the environments they inhabit, which they use for representing facts, for reasoning about these facts, and for planning actions. Much intelligent behaviour seems to involve an ability to model not only one's external environment but also oneself and one's own reasoning. We would therefore wish to be able to construct artificial intelligence systems having such abilities. We call these abilities introspective. In the attempt to construct agents with introspective abilities, a number of theoretical problems are encountered. In particular, problems related to self-reference make it difficult to avoid the possibility of such agents performing self-contradictory reasoning. It is the aim of this thesis to demonstrate how we can construct agents with introspective abilities while at the same time circumventing the problems posed by self-reference.

In the standard approach taken in artificial intelligence, the model that an agent has of its environment is represented as a set of beliefs. These beliefs are expressed as logical formulas within a formal, logical theory. When the logical theory is expressive enough to allow introspective reasoning, the presence of self-reference makes the theory prone to inconsistency. The challenge therefore becomes to construct logical theories supporting introspective reasoning while at the same time ensuring that consistency is retained.

In the thesis, we meet this challenge by devising several such logical theories, which we prove to be consistent. These theories are all based on first-order predicate logic. To prove our consistency results, we develop a general mathematical framework suitable for proving a large number of consistency results concerning logical theories involving various kinds of reflection. The principal idea of the framework is to relate self-reference and the other problems involved in introspection to properties of certain kinds of graphs: graphs representing the semantic dependencies among the logical sentences. The framework is mainly inspired by developments in the semantics of logic programming within computational logic and by formal theories of truth within philosophical logic. The thesis provides a number of examples showing how the developed theories can be used as reasoning frameworks for agents with introspective abilities.
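To make the threat of inconsistency concrete, the following is a minimal LaTeX sketch of the classical knower-style argument due to Kaplan and Montague. The notation (the belief predicate B, the Gödel corner quotes, the sentence lambda) is standard textbook material and is not taken from the thesis itself; it only illustrates why an unrestricted reflection principle for a self-referential belief predicate collapses.

\documentclass{article}
\usepackage{amsmath,amssymb}
\newcommand{\gn}[1]{\ulcorner #1\urcorner} % Goedel quotation of a sentence
\begin{document}
% Standard assumptions (not quoted from the thesis): the theory $T$ extends
% enough arithmetic for the diagonal lemma, $B$ is a unary predicate on codes,
% $T$ proves the reflection schema (R) $B(\gn{\varphi}) \to \varphi$, and
% $T$ is closed under necessitation (N): if $T \vdash \varphi$ then $T \vdash B(\gn{\varphi})$.
By the diagonal lemma choose a sentence $\lambda$ with
\[ T \vdash \lambda \leftrightarrow \neg B(\gn{\lambda}). \]
From (R), $T \vdash B(\gn{\lambda}) \to \lambda$, hence
$T \vdash B(\gn{\lambda}) \to \neg B(\gn{\lambda})$, hence
$T \vdash \neg B(\gn{\lambda})$ and therefore $T \vdash \lambda$.
By (N), $T \vdash B(\gn{\lambda})$, contradicting $T \vdash \neg B(\gn{\lambda})$,
so $T$ is inconsistent.
\end{document}

It is exactly this kind of collapse that the thesis's consistency results are designed to avoid, in outline by restricting which sentences may depend on themselves in the dependency graphs mentioned above.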
Similar resources
Negative Introspection Is Mysterious
The paper provides a short argument that negative introspection cannot be algorithmic. This result, concerning a principle of belief, fits with what we know about provability principles. Autoepistemic reasoning is reasoning whose inferences depend on representing one's own state of belief. A cognitive agent engaged in autoepistemic reasoning draws conclusions from introspective beliefs. ...
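For reference, the negative introspection principle at issue is the standard axiom 5 schema; a minimal LaTeX rendering of its epistemic and doxastic forms follows (standard notation, not quoted from the cited paper):

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Negative introspection (axiom 5), for knowledge $K$ and for belief $B$.
\[ \neg K\varphi \rightarrow K\neg K\varphi
   \qquad\text{and}\qquad
   \neg B\varphi \rightarrow B\neg B\varphi . \]
\end{document}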
Budget-Constrained Knowledge in Multiagent Systems
The paper introduces a modal logical system for reasoning about knowledge in which information available to agents might be constrained by the available budget. Although the system lacks an equivalent of the standard Negative Introspection axiom from epistemic logic S5, it is proven to be sound and complete with respect to an S5-like Kripke semantics.
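As background for the S5-like Kripke semantics mentioned in this entry, here is the usual truth condition for the knowledge modality of an agent a, in LaTeX. This is generic textbook material; the budget-constrained modality defined in the paper itself is not reproduced here.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Standard truth condition: $\sim_a$ is agent $a$'s accessibility relation,
% an equivalence relation in S5. Not the budgeted modality of the cited paper.
\[ M, w \models K_a\varphi
   \quad\text{iff}\quad
   M, v \models \varphi \ \text{for all } v \text{ such that } w \sim_a v . \]
\end{document}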
A Logic of Knowing Why
When we say “I know why he was late”, we know not only the fact that he was late, but also an explanation of this fact. We propose a logical framework of “knowing why” inspired by the existing formal studies on why-questions, scientific explanation, and justification logic. We introduce the Ky_i operator into the language of epistemic logic to express “agent i knows why φ” and propose a Kripke-...
Diversity of Agents and Their Interaction
Diversity of agents occurs naturally in epistemic logic, and dynamic logics of information update and belief revision. We provide a systematic discussion of different sources of diversity, such as introspection ability, powers of observation, memory capacity, and revision policies, and we show how these can be encoded in dynamic epistemic logics allowing for individual variation among agents. N...
Self-Correcting Unsound Reasoning Agents
This paper introduces a formal framework for relating learning and deduction in reasoning agents. Our goal is to capture imperfect reasoning as well as the progress, through introspection, towards a better reasoning ability. We capture the interleaving between these by a reasoning/deduction connection, and we show how this definition and related ones apply to a setting in which agents are modeled by ...
Publication date: 2003